# Mixed Precision Training
## Featured Recommended AI Models

### Kanana Nano 2.1b Embedding
- Author: kakaocorp · Downloads: 7,722 · Likes: 20
- Task: Large Language Model · Library: Transformers · Multilingual

Kanana is a bilingual (Korean/English) language model series developed by Kakao. It excels at Korean tasks while remaining competitive on English tasks, at a significantly lower computational cost than models of similar scale.
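As a rough illustration of how an embedding checkpoint like this can be used, here is a minimal sketch with the Transformers API; the hub ID `kakaocorp/kanana-nano-2.1b-embedding` and the mean-pooling step are assumptions for the example, not details taken from the listing.

```python
# Minimal sketch; the hub ID and mean-pooling strategy are assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "kakaocorp/kanana-nano-2.1b-embedding"  # assumed hub ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

# Some decoder-style tokenizers define no padding token; fall back to EOS.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

sentences = ["카카오가 공개한 이중 언어 모델", "A bilingual Korean/English model"]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**batch).last_hidden_state  # (batch, seq_len, hidden_dim)

# Mean-pool over non-padding tokens to get one vector per sentence.
mask = batch["attention_mask"].unsqueeze(-1)
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)
```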
### Summarisation Model
- Author: Saravanankumaran · Downloads: 16 · Likes: 0
- Task: Text Generation · Library: Transformers

A T5 text summarization model fine-tuned on the SAMSum dataset, suited to dialogue and conversation summarization tasks.
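A minimal usage sketch for a dialogue summarizer of this kind, assuming the checkpoint is published as `Saravanankumaran/summarisation_model` (the exact repository name is not given in the listing).

```python
# Minimal sketch; the hub ID is an assumption, not taken from the listing.
from transformers import pipeline

summarizer = pipeline("summarization", model="Saravanankumaran/summarisation_model")

dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)
print(summarizer(dialogue, max_length=60, min_length=5)[0]["summary_text"])
```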
### Modern Bert Multilingual
- Author: neavo · License: Apache-2.0 · Downloads: 333 · Likes: 20
- Task: Large Language Model · Format: Safetensors · Multilingual

ModernBertMultilingual is a multilingual model trained from scratch that supports Chinese, English, Japanese, and Korean, with strong performance on mixed East Asian language text tasks.
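If the checkpoint ships a masked-language-modeling head, it can be exercised with the fill-mask pipeline; both that assumption and the hub ID `neavo/modern_bert_multilingual` are guesses made for this sketch.

```python
# Minimal sketch; the hub ID and the presence of an MLM head are assumptions.
from transformers import pipeline

fill = pipeline("fill-mask", model="neavo/modern_bert_multilingual")
for candidate in fill("東京は日本の[MASK]です。"):
    print(candidate["token_str"], round(candidate["score"], 3))
```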
### Zlm B64 Le4 S8000
- Author: mikhail-panzo · License: MIT · Downloads: 24 · Likes: 0
- Task: Speech Synthesis · Library: Transformers

A speech synthesis (TTS) model fine-tuned from microsoft/speecht5_tts, used primarily for text-to-speech conversion tasks.
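A minimal text-to-speech sketch following the standard SpeechT5 flow from the Transformers documentation; the hub ID `mikhail-panzo/zlm_b64_le4_s8000` and the speaker-embedding source are assumptions.

```python
# Minimal sketch; the fine-tuned hub ID and speaker-embedding choice are assumptions.
import soundfile as sf
import torch
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

model_id = "mikhail-panzo/zlm_b64_le4_s8000"  # assumed hub ID
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# SpeechT5 conditions on a speaker x-vector; borrow one from CMU ARCTIC.
speakers = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(speakers[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Hello, this is a fine-tuned SpeechT5 voice.", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```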
### Dreambooth Diffusion Clay Cups
- Author: keras-dreambooth · Downloads: 13 · Likes: 0
- Task: Image Generation

A text-to-image generation model fine-tuned with Keras DreamBooth, capable of producing images in the style of Bengali clay art.
### Bert Finetuned Small Data Uia
- Author: eibakke · License: Apache-2.0 · Downloads: 22 · Likes: 0
- Task: Large Language Model · Library: Transformers

A fine-tuned version of bert-base-uncased trained on an unspecified dataset, suitable for natural language processing tasks.
### Vit Base Patch16 224 In21k Wwwwii
- Author: Zynovia · License: Apache-2.0 · Downloads: 22 · Likes: 0
- Task: Image Classification · Library: Transformers

A fine-tuned version of Google's ViT model trained on an unknown dataset, used primarily for image classification tasks.
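A minimal classification sketch; the hub ID `Zynovia/vit-base-patch16-224-in21k-wwwwii` and the local image path are assumptions.

```python
# Minimal sketch; the hub ID and image path are assumptions.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="Zynovia/vit-base-patch16-224-in21k-wwwwii",  # assumed hub ID
)
print(classifier("example.jpg")[:3])  # top-3 predictions for a local image
```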
### Wav2vec2 2 Gpt2 Regularisation
- Author: sanchit-gandhi · Downloads: 20 · Likes: 0
- Task: Speech Recognition · Library: Transformers

An automatic speech recognition (ASR) model trained on the LibriSpeech dataset, capable of transcribing English speech to text.
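A minimal transcription sketch using the automatic-speech-recognition pipeline; the hub ID `sanchit-gandhi/wav2vec2-2-gpt2-regularisation` and the audio file are assumptions, and the same pattern applies to the other wav2vec2-based checkpoints further down this list.

```python
# Minimal sketch; the hub ID and audio path are assumptions.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="sanchit-gandhi/wav2vec2-2-gpt2-regularisation",  # assumed hub ID
)
print(asr("speech_sample.wav")["text"])  # expects a 16 kHz mono recording
```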
### Wav2vec2 Large Lv60h 100h 2nd Try
- Author: patrickvonplaten · Downloads: 20 · Likes: 0
- Task: Speech Recognition · Library: Transformers

A wav2vec2-large-lv60 speech recognition model fine-tuned on the LibriSpeech dataset, supporting English speech-to-text tasks.
### Alphadelay
- Author: renBaikau · License: Apache-2.0 · Downloads: 17 · Likes: 0
- Task: Speech Recognition · Library: Transformers

A speech recognition model fine-tuned from facebook/wav2vec2-base, with a reported word error rate (WER) of 1.0 (i.e., 100%).
### Wav2vec2 2 Bart Base
- Author: patrickvonplaten · Downloads: 493 · Likes: 5
- Task: Speech Recognition · Library: Transformers

A speech recognition model combining wav2vec2-base and bart-base, fine-tuned on the LibriSpeech ASR clean dataset.
### Transformers Qa
- Author: keras-io · License: Apache-2.0 · Downloads: 23 · Likes: 4
- Task: Question Answering System · Library: Transformers

A question-answering model based on distilbert-base-cased and fine-tuned on the SQuAD dataset, trained specifically for the Keras.io question-answering tutorial.
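A minimal extractive question-answering sketch, assuming the checkpoint is published as `keras-io/transformers-qa`; if the repository ships TensorFlow weights only, TensorFlow must be installed for the pipeline to load it.

```python
# Minimal sketch; the hub ID is an assumption, not taken from the listing.
from transformers import pipeline

qa = pipeline("question-answering", model="keras-io/transformers-qa")
result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="The model is a distilbert-base-cased checkpoint fine-tuned on SQuAD.",
)
print(result["answer"], round(result["score"], 3))
```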
### Test Model
- Author: mchochowski · License: Apache-2.0 · Downloads: 18 · Likes: 0
- Task: Image Classification · Library: Transformers

ResNet50 v1.5 is an improved version of the original ResNet50 v1 model, achieving approximately 0.5% higher top-1 accuracy by adjusting convolution strides.
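The listing does not give this checkpoint's hub ID, so the sketch below loads `microsoft/resnet-50`, a publicly available ResNet-50 v1.5 checkpoint, purely to illustrate the ResNet classification flow in Transformers.

```python
# Illustrative only: microsoft/resnet-50 is a stand-in ResNet-50 v1.5 checkpoint,
# not necessarily the exact model referenced in this listing.
import torch
from PIL import Image
from transformers import AutoImageProcessor, ResNetForImageClassification

processor = AutoImageProcessor.from_pretrained("microsoft/resnet-50")
model = ResNetForImageClassification.from_pretrained("microsoft/resnet-50")

image = Image.open("example.jpg")  # placeholder path to any RGB image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(-1))])
```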